Accelerated Backpropagation Learning: Parallel Tangent Optimization Algorithm

Authors

  • Ali A. Ghorbani
  • Virendra C. Bhavsar
Abstract

A modified backpropagation learning algorithm for training artificial neural networks using a deflecting gradient technique, which may be considered a special case of the conjugate gradient methods, is proposed. The parallel tangent (Partan) gradient is used as an alternative to the momentum term to accelerate convergence. The Partan gradient consists of two phases, namely climbing through the gradient and accelerating through the parallel tangent. Partan over...
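The two-phase scheme described in the abstract can be illustrated with a minimal sketch. This is a hedged illustration, not the paper's implementation: the fixed step sizes `lr` and `accel` are assumptions, whereas a full Partan method would typically choose the acceleration step by a line search along the parallel tangent.

```python
import numpy as np

def partan_descent(grad, x0, lr=0.015, accel=0.5, iters=100):
    """Two-phase Partan iteration: a gradient (climbing) step followed
    by an accelerating step along the parallel tangent, i.e. the line
    through the base point two phases back and the new gradient point."""
    x_prev = x0
    x = x0 - lr * grad(x0)                 # initial climbing step
    for _ in range(iters):
        y = x - lr * grad(x)               # phase 1: climb through the gradient
        x_next = y + accel * (y - x_prev)  # phase 2: accelerate through the parallel tangent
        x_prev, x = x, x_next
    return x

# Toy usage: a badly conditioned quadratic where plain gradient descent zigzags.
A = np.diag([1.0, 50.0])
grad = lambda x: 2.0 * A @ x
print(partan_descent(grad, np.array([5.0, 5.0])))  # approaches the minimum at the origin
```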

Similar resources

A Novel Fast Backpropagation Learning Algorithm Using Parallel Tangent and Heuristic Line Search

In gradient-based learning algorithms, the momentum term usually improves the convergence rate and reduces the zigzagging phenomenon. However, it sometimes causes the convergence rate to decrease instead. The parallel tangent (ParTan) gradient is used as a deflecting method to improve convergence. From the implementation point of view, it is as simple as the momentum. In fact this method is...
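For comparison, the classic momentum update that ParTan replaces can be sketched as follows; the values of `lr` and `mu` are illustrative assumptions, not taken from the paper.

```python
def momentum_step(w, v, g, lr=0.01, mu=0.9):
    """Classic momentum update: the velocity v accumulates past gradients g.
    In narrow valleys this accumulated direction can overshoot and zigzag,
    which is the behaviour the ParTan deflection is meant to avoid."""
    v = mu * v - lr * g      # blend the previous direction with the new gradient
    return w + v, v
```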


Backpropagation training in adaptive quantum networks

We introduce a robust, error-tolerant adaptive training algorithm for generalized learning paradigms in high-dimensional superposed quantum networks, or adaptive quantum networks. The formalized procedure applies standard backpropagation training across a coherent ensemble of discrete topological configurations of individual neural networks, each of which is formally merged into appropriate lin...
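The classical skeleton of this procedure, namely standard backpropagation applied across an ensemble of discrete network topologies whose outputs are merged linearly, can be sketched as below. The quantum-superposition formalism itself is not reproduced, and the widths in `topologies`, the data shapes, and the learning rate are all hypothetical.

```python
import torch
import torch.nn as nn

# Hedged classical analogue (not the paper's quantum formalism): apply
# standard backprop jointly across an ensemble of distinct topological
# configurations whose outputs are merged by a learned linear combination.
topologies = [4, 8, 16]   # hypothetical hidden widths, one per configuration
nets = nn.ModuleList(
    nn.Sequential(nn.Linear(2, h), nn.Tanh(), nn.Linear(h, 1)) for h in topologies
)
merge = nn.Linear(len(topologies), 1, bias=False)   # linear merge of ensemble outputs
opt = torch.optim.SGD(list(nets.parameters()) + list(merge.parameters()), lr=0.05)

x, y = torch.randn(64, 2), torch.randn(64, 1)       # toy data
ensemble = torch.cat([net(x) for net in nets], dim=1)   # shape (64, 3)
loss = nn.functional.mse_loss(merge(ensemble), y)
opt.zero_grad()
loss.backward()   # one backprop pass updates every configuration at once
opt.step()
```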


Accelerated Backpropagation Learning: Two Optimization Methods

Two methods for increasing performance of the backpropagation learning algorithm are presented and their results are compared with those obtained by optimizing parameters in the standard method. The first method requires adaptation of a scalar learning rate in order to decrease the energy value along the gradient direction in a close-to-optimal way. The second is derived from the con...
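A close-to-optimal scalar learning rate along the gradient direction can be illustrated with a standard backtracking (Armijo) line search. This is a generic sketch, not necessarily the exact adaptation rule of that paper, and the constants `shrink` and `c` are assumptions.

```python
import numpy as np

def gradient_step_with_line_search(f, grad, w, lr0=1.0, shrink=0.5, c=1e-4):
    """One descent step whose scalar learning rate is adapted so the
    energy f decreases along the gradient direction (Armijo condition)."""
    g = grad(w)
    lr = lr0
    while f(w - lr * g) > f(w) - c * lr * (g @ g):
        lr *= shrink                  # shrink the step until sufficient decrease
    return w - lr * g

# Toy usage on the energy f(x) = ||x||^2.
f = lambda x: x @ x
grad = lambda x: 2.0 * x
print(gradient_step_with_line_search(f, grad, np.array([3.0, -4.0])))
```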


Revisit Long Short-Term Memory: An Optimization Perspective

Long Short-Term Memory (LSTM) is a deep recurrent neural network architecture with high computational complexity. Contrary to the standard practice of training LSTM online with stochastic gradient descent (SGD) methods, we propose a matrix-based batch learning method for LSTM with full Backpropagation Through Time (BPTT). We further solve the state drifting issues and improve the overall ...
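A minimal PyTorch sketch of the full-BPTT batch setting the abstract contrasts with online SGD is given below; the dimensions and learning rate are hypothetical, and the paper's matrix-based formulation is not reproduced, only the idea that the loss is backpropagated through every time step in one batch update.

```python
import torch
import torch.nn as nn

seq_len, batch, in_dim, hid_dim = 20, 32, 8, 16     # hypothetical sizes
lstm = nn.LSTM(in_dim, hid_dim)
head = nn.Linear(hid_dim, 1)
opt = torch.optim.SGD(list(lstm.parameters()) + list(head.parameters()), lr=0.1)

x = torch.randn(seq_len, batch, in_dim)             # a toy batch of sequences
y = torch.randn(seq_len, batch, 1)

out, _ = lstm(x)                 # forward pass over the full sequence
loss = nn.functional.mse_loss(head(out), y)
opt.zero_grad()
loss.backward()                  # gradients flow through all 20 steps: full BPTT
opt.step()
```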



Publication date: 1983